
    THE EFFECT OF IMAGE, PROMOTION, COMMUNICATION, AND FACILITIES ON SENIOR HIGH SCHOOL STUDENTS’ INTEREST

    The research data were obtained through questionnaires from 320 senior high school and vocational school students around the campus of STIKOM Muhammadiyah Batam. The analysis uses an associative quantitative model with multiple correlation, and the calculations show that the four variables image, promotion, communication, and facilities together have a significant effect on students’ interest in STIKOM Muhammadiyah Batam. There is a positive direct effect of image on senior high school students’ interest in entering STIKOM Muhammadiyah Batam, since t_count > t_table (14.771 > 1.976). For promotion, the analysis yields a t_count of 10.071, greater than the t_table value of 1.976 (10.071 > 1.976), so there is a positive direct effect of promotion on students’ interest. For communication, the analysis yields a t_count of 4.271, greater than the t_table value of 1.976 (4.271 > 1.976), so there is a positive direct effect of communication on students’ interest. The F_count of 2245.199 is greater than the F_table value of 2.400 (2245.199 > 2.400), confirming the joint effect of the four variables. The conclusion is that strengthening image, promotion, communication, and facilities will increase students’ interest in STIKOM Muhammadiyah Batam. Keywords: Interest, Image, Promotion, Communication, Facilities
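
    The abstract’s conclusions rest on comparing computed test statistics against tabulated critical values. The short sketch below restates that decision rule in Python using only the figures reported above; the variable names are illustrative and nothing beyond those reported figures comes from the study.

        # Decision rule sketch using the statistics reported in the abstract.
        T_TABLE = 1.976   # critical t value reported above
        F_TABLE = 2.400   # critical F value reported above

        t_counts = {"image": 14.771, "promotion": 10.071, "communication": 4.271}

        for variable, t_count in t_counts.items():
            verdict = "positive direct effect" if t_count > T_TABLE else "no significant effect"
            print(f"{variable}: t_count {t_count} > t_table {T_TABLE}? -> {verdict}")

        f_count = 2245.199
        joint = "significant joint effect" if f_count > F_TABLE else "not significant"
        print(f"all four variables: F_count {f_count} vs F_table {F_TABLE} -> {joint}")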

    Rules for deriving efficient independent transaction

    A transaction is a collection of operations that performs a single logical function in a database application. Each transaction is a unit of both atomicity and consistency. Thus, transactions are required not to violate any database consistency constraints. In most cases, the update operations in a transaction are executed sequentially. The effect of a single operation in a transaction may potentially be changed by another operation in the same transaction, which means that sequential execution sometimes does redundant work. A transaction with a set of update operations is order dependent if and only if executing the transaction in the serializability order given in the transaction produces an output different from the output produced by interchanging the operations in the transaction. In [8], a transaction is an order dependent transaction if and only if it contains at least two conflicting update operations, i.e. updates that operate on the same data item. In our work, we have identified cases where an update operation operates on a data item that is part of a set of data items operated on by another update operation. Such a transaction is known as a partly order dependent transaction. An order independent transaction has the important advantage that its update statements can be executed in parallel without considering their relative execution order; its individual updates can be considered in an arbitrary order, and executing them in parallel can reduce the execution time. In this paper, we present rules that can be applied to generate an order independent transaction from a given order dependent or partly order dependent transaction. In addition, we have identified several rules that can be applied to eliminate redundant and subsumed operations that would otherwise incur unnecessary execution cost.
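
    Since the distinction between order dependent, partly order dependent, and order independent transactions hinges on which data items each update touches, a minimal sketch may help. The classification below is one plausible reading of the definitions in the abstract, assuming each update can be summarised by the set of data items it writes; it is illustrative and is not the rule set presented in the paper.

        from itertools import combinations

        def classify(updates):
            # updates: list of sets, each the data items written by one update operation
            partly = False
            for a, b in combinations(updates, 2):
                if not (a & b):
                    continue                      # disjoint writes: this pair cannot conflict
                if a == b:
                    return "order dependent"      # both updates write exactly the same items
                partly = True                     # one update touches part of the other's items
            return "partly order dependent" if partly else "order independent"

        print(classify([{"x"}, {"x", "y"}]))      # partly order dependent
        print(classify([{"x"}, {"y"}]))           # order independent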

    A strategy for semantic integrity checking in distributed databases

    Integrity constraints represent knowledge about data with which a database must be consistent. The process of checking constraints to ensure that the update operations or transactions which alter the database will preserve its consistency has proved to be extremely difficult to implement efficiently, particularly in a distributed environment. In the literature, most of the approaches/methods proposed for finding/deriving a good set of integrity constraints concentrate on deriving simplified forms of the constraints by analyzing both the syntax of the constraints and their appropriate update operations. These methods are based on syntactic criteria, are limited to simple types of integrity constraints, and are only able to produce one integrity test for each integrity constraint. In Ibrahim, Gray, and Fiddian (1997), we introduced an integrity constraint subsystem for a relational distributed database. The subsystem consists of several techniques necessary for efficient constraint checking, particularly in a distributed environment where data distribution is transparent to the application domain. However, the technique proposed for generating integrity tests is limited to several types of integrity constraints, namely domain, key, referential and simple general semantic constraints, and only produces two integrity tests (global and local) for a given integrity constraint. In this paper, we present a technique for deriving several integrity tests for a given integrity constraint, covering both static and transition constraints.
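
    To make the notion of multiple integrity tests for one constraint concrete, here is a hypothetical sketch for a referential constraint "every emp.dept must appear in dept.dno". The relation and attribute names, and the particular tests, are assumptions for illustration; the derivation rules of the paper are not reproduced here.

        def global_test(new_dept, dept_rows):
            # Complete test: consult the (possibly remote) dept relation.
            return any(row["dno"] == new_dept for row in dept_rows)

        def local_test(new_dept, local_emp_rows):
            # Sufficient test: if some employee stored at this site already has the
            # same department, the constraint must hold, with no remote access needed.
            return any(row["dept"] == new_dept for row in local_emp_rows)

        def check_insert(new_dept, local_emp_rows, fetch_dept_rows):
            # Try the cheap local test first; only fall back to the global test.
            return local_test(new_dept, local_emp_rows) or global_test(new_dept, fetch_dept_rows())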

    Checking integrity constraints - how it differs in centralized, distributed and parallel databases

    An important aim of a database system is to guarantee database consistency, which means that the data contained in a database is both accurate and valid. Integrity constraints represent knowledge about data with which a database must be consistent. The process of checking constraints to ensure that update operations or transactions which alter the database will preserve its consistency has proved to be extremely difficult to implement, particularly in distributed and parallel databases. In distributed databases the aim of constraint checking is to reduce the amount of data that needs to be accessed, the number of sites involved, and the amount of data transferred across the network. In parallel databases the focus is on the total execution time taken to check the constraints. This paper highlights the differences between centralized, distributed and parallel databases with respect to constraint checking.

    Deriving global and local integrity rules for a distributed database

    An important aim of a database system is to guarantee database consistency, which means that the data contained in a database is both accurate and valid. Integrity constraints represent knowledge about data with which a database must be consistent. The process of checking constraints to ensure that update operations or transactions which alter the database will preserve its consistency has proved to be extremely difficult to implement, particularly in a distributed database. In this paper, we describe an enforcement algorithm based on rule mechanisms for a distributed database which aims at minimising the amount of data that has to be accessed or transferred across the underlying network by maintaining the consistency of the database at a single site, i.e. at the site where the update is to be performed. Our technique, referred to as integrity test generation, derives global and local integrity rules and has effectively reduced the cost of constraint checking in a distributed environment.
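
    The rule-based enforcement described above can be pictured as an event-condition-action rule evaluated entirely at the site performing the update. The sketch below is a hypothetical encoding of that idea; the rule format, relation names, and the abort-on-failure action are assumptions, not the paper's algorithm.

        class IntegrityRule:
            def __init__(self, event, local_test, on_violation):
                self.event = event                 # e.g. "insert:emp"
                self.local_test = local_test       # predicate over data held at this site only
                self.on_violation = on_violation   # action taken when the local check fails

            def fire(self, update, local_db):
                if self.local_test(update, local_db):
                    return "accept"                # consistency established without remote access
                return self.on_violation(update)   # e.g. abort, or escalate to a global check

        rule = IntegrityRule(
            event="insert:emp",
            local_test=lambda upd, db: upd["dept"] in db.get("known_depts", set()),
            on_violation=lambda upd: "abort",
        )
        print(rule.fire({"name": "ann", "dept": "d1"}, {"known_depts": {"d1", "d2"}}))  # accept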

    Cost and Performance Analysis of Integrity Validation Techniques for a Distributed Database

    A principal problem with the use of integrity constraints for monitoring the integrity of a dynamically changing database is their cost of evaluation. This cost, which is associated with the performance of the checking mechanisms, is the main quantitative measure that has to be monitored carefully. We have developed an integrity constraint subsystem for a relational distributed database (SICSDD) which consists of several techniques that are necessary for efficient constraint checking, particularly in a distributed environment where data distribution is transparent to the application domain. In this paper, we show how these techniques have effectively reduced the cost of constraint checking in such a distributed environment.

    Inferring functional dependencies for XML storage

    XML’s hierarchical structure allows data redundancy: elements may be nested and repeated, so the same information can appear in more than one place, and the same elements can appear in different sub-trees. This makes XML easier to understand and to parse, and recovering the information requires fewer joins. It contrasts with relational data, for which normalization theory has been developed to eliminate data redundancy. Detecting redundancy in XML data is therefore important before mapping can be done. In this paper, we use functional dependencies to detect data redundancies in XML documents. Based on inferring other functional dependencies from the given ones, we propose an algorithm for mapping XML DTDs to relational schemas. The result is a “good relational schema” in terms of reducing data redundancy and preserving the semantic constraints.
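
    The inference step the abstract relies on can be illustrated with the standard attribute-closure computation: new functional dependencies follow from the given ones whenever the right-hand side falls inside the closure of the left-hand side. This sketch is the textbook procedure, not the paper's DTD-mapping algorithm, and the attribute names are illustrative.

        def closure(attrs, fds):
            # attrs: frozenset of attributes; fds: list of (lhs, rhs) frozenset pairs
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return frozenset(result)

        # Given title -> author and author -> affiliation, the dependency
        # title -> affiliation can be inferred even though it was never stated.
        fds = [(frozenset({"title"}), frozenset({"author"})),
               (frozenset({"author"}), frozenset({"affiliation"}))]
        print("affiliation" in closure(frozenset({"title"}), fds))   # True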

    Security policy integration based on role-based access control model in healthcare collaborative environments

    Recent research has focused on security policy integration and conflict reconciliation among various healthcare organizations. However, challenging security and privacy issues still arise when sensitive patient data are shared across large distributed organizations. In this paper, we propose an approach for integrating security policies based on the Role-Based Access Control (RBAC) policy model that supports dynamic constraint rules and metadata information, reduces policy redundancy, and resolves conflicts based on the types of policy redundancy and conflict. We believe this work can support dynamic updates and policy control in collaborative environments.
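
    A minimal sketch of the integration step may help: two role-based policies are merged, identical entries collapse (removing redundancy), and entries where the same role, resource, and action receive opposite decisions are flagged as conflicts. The policy representation and the deny-overrides resolution used here are assumptions for illustration, not the model proposed in the paper.

        def merge(policy_a, policy_b):
            # Each policy maps (role, resource, action) -> "permit" or "deny".
            merged, conflicts = {}, []
            for key in set(policy_a) | set(policy_b):
                decisions = {p[key] for p in (policy_a, policy_b) if key in p}
                if len(decisions) > 1:
                    conflicts.append(key)
                    merged[key] = "deny"           # deny-overrides as a simple resolution rule
                else:
                    merged[key] = decisions.pop()  # duplicates collapse into one entry
            return merged, conflicts

        a = {("doctor", "record", "read"): "permit"}
        b = {("doctor", "record", "read"): "deny", ("nurse", "record", "read"): "permit"}
        print(merge(a, b))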

    Transaction decomposition technique

    A transaction is a collection of operations that performs a single logical function in a database application. Each transaction is a unit of both atomicity and consistency. Thus, transactions are required not to violate any database consistency constraints. In most cases, the update operations in a transaction are executed sequentially. The effect of a single operation in a transaction may potentially be changed by another operation in the same transaction, which means that sequential execution sometimes does redundant work. It is the transaction designer’s responsibility to define the various transactions properly so that they preserve the consistency of the database. In the literature, three types of faults have been identified in transactions, namely inefficient, unsafe and unreliable transactions. In this paper, we present a technique that can be applied to generate subtransactions to exploit parallelism. In our work, we have identified four types of relationships which can occur in a transaction: redundancy, subsumption, dependence and independence. By analysing these relationships, the transaction can be improved and inefficient transactions can be avoided. Furthermore, generating subtransactions and executing them in parallel can reduce the execution time.
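
    The following sketch illustrates the decomposition idea in its simplest form: updates whose written data items overlap are kept in the same subtransaction, while disjoint groups can execute in parallel. It assumes each update can be summarised by the set of data items it writes; this is an illustration of the idea, not the decomposition technique of the paper.

        def decompose(updates):
            # updates: list of (name, set of data items written); returns groups of names.
            groups = []     # each group: (combined item set, list of update names)
            for name, items in updates:
                touching = [g for g in groups if g[0] & items]
                merged_items, merged_names = set(items), [name]
                for g in touching:
                    merged_items |= g[0]
                    merged_names = g[1] + merged_names
                    groups.remove(g)
                groups.append((merged_items, merged_names))
            return [names for _, names in groups]

        t = [("u1", {"x"}), ("u2", {"y"}), ("u3", {"x", "z"})]
        print(decompose(t))   # [['u2'], ['u1', 'u3']] -> two subtransactions can run in parallel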

    Finer garbage collection in LINDACAP.

    In open systems that persist, garbage collection (GC) is a vital aspect of managing system resources. Although garbage collection has been proposed for standard Linda, it was a rather coarse-grained mechanism. A finer-grained method is offered in Lindacap, a capability-based coordination system for open distributed systems. Multicapabilities in Lindacap enable tuples to be uniquely referenced, thus providing sufficient information on the usability of tuples (data) within the tuple-space. This paper describes the garbage collection mechanism deployed in Lindacap, which involves selectively garbage collecting tuples within tuple-spaces. The authors present the approach using reference counting, followed by a tracing (mark-and-sweep) algorithm to garbage collect cyclic structures. A time-to-idle (TTI) technique is also proposed, which allows garbage collection of multicapability regions that are still referred to by agents but have not been used for a specified length of time. The performance results indicate that incorporating the garbage collection techniques adds little overhead to the overall performance of the system. The difference between the average overhead caused by mark-and-sweep and by reference counting is small, and can be considered insignificant when the benefits brought by mark-and-sweep are taken into account.
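
    The two collection criteria described above, a zero reference count and an exceeded time-to-idle, can be sketched as follows. This is an illustration only, not Lindacap's implementation; the entry structure, the TTI value, and the omission of the mark-and-sweep phase for cycles are all simplifications.

        import time

        class TupleEntry:
            def __init__(self, value):
                self.value = value
                self.refcount = 0                   # references held via multicapabilities
                self.last_used = time.monotonic()   # updated whenever the tuple is accessed

            def touch(self):
                self.last_used = time.monotonic()

        def collect(space, tti_seconds=60.0):
            # Remove tuples that are unreferenced, or idle for longer than the TTI.
            now = time.monotonic()
            dead = [k for k, e in space.items()
                    if e.refcount == 0 or now - e.last_used > tti_seconds]
            for k in dead:
                del space[k]
            return dead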